49 research outputs found

    Computational aerodynamics analysis of non-symmetric multi-element wing in ground effect with humpback whale flipper tubercles

    The humpback whale flipper tubercles have been shown to improve the aerodynamic coefficients of a wing, especially in stall conditions, where the flow is almost fully detached. In this work, these tubercles were implemented on an F1 front-wing geometry very close to the Tyrrell wing. Numerical simulations were carried out employing the k−ω SST turbulence model, and the overall effects of the tubercles on the flow behaviour were analysed. The optimal amplitude and number of tubercles were determined for this front wing, yielding improvements of 22.6% in lift and 9.4% in the L/D ratio. On the main element, stall was delayed by 167.7%. On the flap, the flow is either fully detached, within the large recirculation zone, or fully attached. Overall, in stall conditions, tubercles improve downforce generation, but at the cost of increased drag. Furthermore, as tubercle effects are case-dependent, an optimal tubercle configuration exists for any given geometry.
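
    As an illustration of the geometry under study, the sketch below shows one common way a sinusoidal (tubercled) leading edge is parameterised by amplitude and tubercle count. The cosine form and all values are illustrative assumptions, not the paper's optimised configuration.

    ```python
    import numpy as np

    def tubercled_leading_edge(span=1.0, n_tubercles=6, amplitude=0.025, n_points=200):
        """Chord-extension profile of a sinusoidal (tubercled) leading edge.

        Returns spanwise stations y and the local leading-edge offset dx(y),
        modelled as a cosine wave so each tubercle adds up to `amplitude`
        to the local chord. All parameter values are illustrative.
        """
        y = np.linspace(0.0, span, n_points)
        dx = amplitude * np.cos(2.0 * np.pi * n_tubercles * y / span)
        return y, dx

    y, dx = tubercled_leading_edge()
    print(f"peak-to-trough tubercle height: {dx.max() - dx.min():.4f} span units")
    ```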

    Aerodynamic and structural design of a 2022 Formula One front wing assembly

    The aerodynamic loads generated on a wing are critical to its structural design. When multi-element wings with wingtip devices are selected, it is essential to identify and quantify their structural behaviour to avoid undesirable deformations which degrade aerodynamic performance. This research investigates these questions using numerical methods (Computational Fluid Dynamics and Finite Element Analysis), employing exhaustive validation to ensure the accuracy of the results and to assess their uncertainty. Firstly, a thorough investigation of four baseline configurations is carried out, employing the Reynolds-Averaged Navier–Stokes equations and the k-ω SST (Shear Stress Transport) turbulence model to analyse and quantify the most important aerodynamic and structural parameters. Several structural configurations are analysed, including different materials (metal alloys and two designed fibre-reinforced composites). A 2022 front wing is designed based on a two-dimensional three-element wing adapted to the 2022 FIA Formula One regulations, and its structural components are selected based on a sensitivity analysis of the previous results. The outcome is a wing with a high rigidity-to-weight ratio which satisfies the technical regulations and remains below the maximum deformation limit established before the analysis. Additionally, the superposition principle is shown to be an excellent method for carrying out high-performance structural designs.
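
    The superposition principle mentioned above holds for any linear-elastic structure: the deflection under combined loads equals the sum of the deflections under each load acting alone. A minimal sketch using the standard Euler-Bernoulli cantilever formulae, with assumed illustrative values rather than the paper's wing properties:

    ```python
    # Superposition for a linear-elastic cantilever: tip deflection under
    # combined loads = sum of tip deflections from each load alone.
    # Material and geometry values are illustrative assumptions.
    E = 70e9    # Young's modulus, Pa (generic aluminium alloy)
    I = 2.0e-8  # second moment of area, m^4
    L = 0.9     # cantilever length, m

    def tip_deflection_point_load(P):
        return P * L**3 / (3 * E * I)  # point load P at the free end

    def tip_deflection_uniform_load(w):
        return w * L**4 / (8 * E * I)  # uniformly distributed load w

    combined = tip_deflection_point_load(150.0) + tip_deflection_uniform_load(400.0)
    print(f"combined tip deflection: {combined * 1e3:.2f} mm")
    ```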

    Constraints on optimising encoder-only transformers for modelling sign language with human pose estimation keypoint data

    Supervised deep learning models can be optimised by applying regularisation techniques to reduce overfitting, although fine-tuning the associated hyperparameters can prove difficult. Not all hyperparameters are equal, and understanding the effect each hyperparameter and regularisation technique has on the performance of a given model is of paramount importance in research. We present the first comprehensive, large-scale ablation study for an encoder-only transformer modelling sign language, using the improved Word-level American Sign Language dataset (WLASL-alt) and human pose estimation keypoint data, with a view to constraining the potential to optimise the task. We measure the impact that a range of model parameter regularisation and data augmentation techniques have on sign classification accuracy. We demonstrate that, within the quoted uncertainties, none of the regularisation techniques we employ other than ℓ2 parameter regularisation has an appreciable positive impact on performance, which contradicts results reported by other similar, albeit smaller-scale, studies. We also demonstrate that performance is bounded by the small dataset size for this task rather than by the choice of model parameter regularisation or common dataset augmentation techniques. Furthermore, using the base model configuration, we report a new maximum top-1 classification accuracy of 84% on 100 signs, improving on the previous benchmark result for this model architecture and dataset.
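
    For context, a minimal sketch of an encoder-only transformer over pose-keypoint sequences follows, with ℓ2 parameter regularisation entering as optimiser weight decay. The keypoint count, model dimensions, and layer count are assumptions, not the study's configuration.

    ```python
    import torch
    import torch.nn as nn

    class SignEncoder(nn.Module):
        """Encoder-only transformer for keypoint-sequence classification."""
        def __init__(self, n_keypoints=54, d_model=128, n_classes=100):
            super().__init__()
            self.proj = nn.Linear(n_keypoints * 2, d_model)  # 2D keypoints per frame
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, x):                # x: (batch, frames, n_keypoints * 2)
            h = self.encoder(self.proj(x))
            return self.head(h.mean(dim=1))  # mean-pool over frames

    model = SignEncoder()
    # l2 parameter regularisation enters as weight decay in the optimiser.
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
    ```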

    Computational engineering analysis of external geometrical modifications on the MQ-1 unmanned combat aerial vehicle

    This paper focuses on the effects of external geometrical modifications on the aerodynamic characteristics of the MQ-1 Predator Unmanned Combat Aerial Vehicle (UCAV) using computational fluid dynamics. The investigations are performed for 16 flight conditions at an altitude of 7.6 km and a constant speed of 56.32 m/s. Two models are analysed: the baseline model and the model with external geometrical modifications installed. Both models are investigated at angles of attack from −4° to 16°, bank angles from 0° to 6°, and yaw angles from 0° to 4°. Owing to the unavailability of any experimental (wind tunnel or flight test) data for this UCAV in the literature, a thorough verification of the calculation process is presented to establish confidence in the numerical simulations. The analysis quantifies the loss of lift and increase in drag for the modified version of the MQ-1 Predator UCAV, along with the identification of stall conditions. A local improvement in drag of up to 96% is obtained by relocating the external modifications, whereas a global drag force reduction of roughly 0.5% is observed. The effects of the external geometrical modifications on the control surfaces indicate a blanking phenomenon and a reduction in control surface forces that can degrade the aerodynamic performance of the UCAV.
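
    The abstract does not state how the 16 flight conditions are distributed across the three angle sweeps, so the sketch below simply builds one plausible run matrix over the quoted ranges:

    ```python
    from itertools import chain

    # One plausible CFD run matrix over the quoted sweep ranges
    # (AoA -4 to 16 deg, bank 0 to 6 deg, yaw 0 to 4 deg); the actual
    # 16 conditions used in the paper are not specified in the abstract.
    alpha_sweep = [(a, 0, 0) for a in range(-4, 17, 2)]  # 11 angle-of-attack cases
    bank_sweep  = [(0, b, 0) for b in (2, 4, 6)]         # 3 bank cases
    yaw_sweep   = [(0, 0, y) for y in (2, 4)]            # 2 yaw cases

    conditions = list(chain(alpha_sweep, bank_sweep, yaw_sweep))
    print(f"{len(conditions)} flight conditions")        # -> 16
    for aoa, bank, yaw in conditions:
        print(f"run: AoA={aoa:+d} deg, bank={bank} deg, yaw={yaw} deg")
    ```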

    Sky and ground segmentation in the navigation visions of the planetary rovers

    Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation visions by adopting weak supervision and transfer learning technologies. A new sky and ground segmentation neural network (network-in-U-shaped-network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared with the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% Dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net operates at 40 frames per second (FPS), maintaining the real-time property. The proposed framework successfully fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
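
    All seven quoted metrics can be derived from the pixel-wise confusion counts of a binary mask; the following is a minimal illustrative implementation, not the authors' evaluation code:

    ```python
    import numpy as np

    def binary_segmentation_metrics(pred, gt):
        """Pixel-wise metrics for a binary sky/ground mask (illustrative)."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.sum(pred & gt)
        tn = np.sum(~pred & ~gt)
        fp = np.sum(pred & ~gt)
        fn = np.sum(~pred & gt)
        return {
            "accuracy":  (tp + tn) / pred.size,
            "precision": tp / (tp + fp),
            "recall":    tp / (tp + fn),
            "dice_f1":   2 * tp / (2 * tp + fp + fn),
            "mcr":       (fp + fn) / pred.size,
            "rmse":      np.sqrt(np.mean((pred.astype(float) - gt.astype(float)) ** 2)),
            "iou":       tp / (tp + fp + fn),
        }

    pred = np.random.rand(480, 640) > 0.5  # placeholder predicted mask
    gt = np.random.rand(480, 640) > 0.5    # placeholder ground-truth mask
    print(binary_segmentation_metrics(pred, gt))
    ```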

    Drone model identification by convolutional neural network from video stream

    We present a convolutional neural network model that correctly identifies drone models in real-life video streams of flying drones. To achieve this, we show a method of generating synthetic drone images. To create a diverse dataset, the simulation parameters (such as drone textures, lighting, and orientation) are randomised. This synthetic dataset is used to train a convolutional neural network to identify the drone model: DJI Phantom, DJI Mavic, or DJI Inspire. The model is then tested on the real-life Anti-UAV dataset of flying drones. The benchmark results show that the DenseNet201 architecture performs best, and that adding Gaussian noise to the training dataset and performing full training (as opposed to freezing layers) gives the strongest results. The model achieves an average accuracy of 92.4% and an average precision of 88.6% on the test dataset.
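
    A minimal sketch of the best-performing setup described above, i.e. full fine-tuning of DenseNet201 with Gaussian noise added to the training images; the noise level, input size, and learning rate are assumptions:

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    class AddGaussianNoise:
        """Adds zero-mean Gaussian noise to a [0, 1] image tensor."""
        def __init__(self, std=0.05):
            self.std = std
        def __call__(self, img):
            return (img + torch.randn_like(img) * self.std).clamp(0.0, 1.0)

    train_tf = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        AddGaussianNoise(std=0.05),
    ])

    model = models.densenet201(weights="IMAGENET1K_V1")
    model.classifier = nn.Linear(model.classifier.in_features, 3)  # Phantom/Mavic/Inspire

    # Full training: every layer stays trainable (no frozen layers).
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    ```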

    Development of vision guided real-time trajectory planning system for autonomous ground refuelling operations using hybrid dataset

    Accurate and rapid object localisation and pose estimation play key roles in real-time robotic operations such as object grasping and manipulation, and high-level robotic vision solutions need to be adopted to achieve them. Computer vision approaches require a large amount of data to build a robust perception pipeline. Preparing such a dataset to train a deep neural network can be challenging, as collecting and manually annotating huge amounts of data takes long hours, and the dataset needs to cover different weather and lighting conditions. To ease this process, a synthetic dataset can be generated. Owing to the limitations of synthetic data, described further below, a hybrid dataset can be developed that combines synthetic and real data to overcome the limitations of both. Even though the main objective of this study is to accomplish autonomous nozzle insertion for the ground refuelling of civil aircraft, the proposed approach is generic and can be adapted to any 3D visual robotic manipulation operation. To date, this study also offers the first visual trajectory planning control mechanism based on a hybrid dataset.
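
    In outline, the hybrid-dataset idea amounts to merging synthetic and real samples into a single training set; the sketch below uses hypothetical Dataset classes as stand-ins for the paper's data:

    ```python
    import torch
    from torch.utils.data import ConcatDataset, DataLoader

    class SyntheticRefuelDataset(torch.utils.data.Dataset):
        """Hypothetical stand-in: rendered images with 6-DoF pose labels."""
        def __init__(self, n=1000):
            self.n = n
        def __len__(self):
            return self.n
        def __getitem__(self, i):
            return torch.rand(3, 224, 224), torch.rand(6)  # placeholder sample

    class RealRefuelDataset(SyntheticRefuelDataset):
        """Hypothetical stand-in: would load annotated real images."""
        pass

    hybrid = ConcatDataset([SyntheticRefuelDataset(1000), RealRefuelDataset(200)])
    loader = DataLoader(hybrid, batch_size=16, shuffle=True)
    ```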

    Drone model classification using convolutional neural network trained on synthetic data

    We present a convolutional neural network (CNN) that identifies drone models in real-life videos. The neural network is trained on synthetic images and tested on a real-life dataset of drone videos. To create the training and validation datasets, we show a method of generating synthetic drone images. Domain randomisation is used to vary simulation parameters such as model textures, background images, and orientation. Three common drone models are classified: DJI Phantom, DJI Mavic, and DJI Inspire. To test the performance of the neural network model, Anti-UAV, a real-life dataset of flying drones, is used. The proposed method reduces the time cost associated with manually labelling drones, and we show that it transfers to real-life videos. The CNN achieves an overall accuracy of 92.4%, a precision of 88.8%, a recall of 88.6%, and an F1 score of 88.7% when tested on the real-life dataset.
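
    The quoted figures are standard multi-class classification metrics; a brief sketch of how they can be computed with scikit-learn, using placeholder labels rather than the paper's data:

    ```python
    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score)

    # Placeholder labels for the three classes; not the paper's data.
    y_true = ["Phantom", "Mavic", "Inspire", "Mavic", "Phantom"]
    y_pred = ["Phantom", "Mavic", "Inspire", "Phantom", "Phantom"]

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
    print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
    print("F1 score :", f1_score(y_true, y_pred, average="macro", zero_division=0))
    ```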

    The influence of micro-expressions on deception detection

    Facial micro-expressions are universal symbols of emotions that provide cohesion to interpersonal communication, and changes in micro-expressions are considered among the most important cues in the psychology of emotion. Analysis and recognition of these micro-expressions have pervaded various areas, such as security and psychology; in security-related matters, micro-expressions are widely used to detect deception. In this research, a deep learning model that interprets changes in the face into meaningful information is trained on the Facial Expression Recognition 2013 (FER2013) dataset. The necessary data are obtained from a live or recorded video stream by detecting faces with computer vision methods and evaluating them with the trained model. Finally, the obtained data are plotted and interpreted to determine whether a person is attempting to deceive. The deception classification accuracy of the custom-trained model is 74.17%, and high-precision face detection using computer vision methods increases the accuracy of the obtained data and allows it to be interpreted correctly. In this respect, the study differs from other studies using the same dataset. In addition, the study aims to make deception detection, usually performed in a complex and expensive way, simple and understandable.
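
    A minimal sketch of the detection stage: frames are read from a live or recorded stream, faces are located with a standard OpenCV Haar cascade, and each face crop is resized to the 48x48 grayscale input used by FER2013-style classifiers. The `model.predict` call is a hypothetical placeholder for the trained model.

    ```python
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # 0 = webcam; a video file path also works

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # FER2013 input size
            # scores = model.predict(face[None, :, :, None])     # hypothetical model
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break
    cap.release()
    ```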

    Rock segmentation in the navigation vision of the planetary rovers

    Visual navigation is an essential part of planetary rover autonomy, and rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modelling. Rock segmentation is challenging for rover autonomy because of high computational consumption, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthesis algorithm to generate synthetic images, which are then used to pre-train NI-U-Net++. The synthesis algorithm increases the size of the image dataset and provides pixel-level masks, both of which are common challenges in machine learning tasks. The pre-training process achieves state-of-the-art results compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ using real-life images and achieves an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into a planetary rover navigation vision and achieves real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework requires manual annotation of only about 8% (183) of the 2250 images in the navigation vision, making it a labour-saving solution for rock segmentation tasks. The proposed rock segmentation framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthesis algorithm improves the process of creating valid data for the rock segmentation challenge. All source code, datasets, and trained models from this research are openly available in the Cranfield Online Research Data (CORD) repository.
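
    The two-stage training reads, in outline, as pre-training on synthetic images followed by transfer-training (fine-tuning) on the small annotated real set. In the sketch below, `model` and the data loaders are hypothetical stand-ins, and the epoch counts and learning rates are assumptions:

    ```python
    import torch

    def train(model, loader, epochs, lr):
        """One training stage: plain supervised segmentation training."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.BCEWithLogitsLoss()  # binary rock / not-rock mask
        for _ in range(epochs):
            for images, masks in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), masks)
                loss.backward()
                opt.step()

    # Stage 1: pre-training on generated images with pixel-level masks.
    # train(model, synthetic_loader, epochs=50, lr=1e-3)
    # Stage 2: transfer-training on the ~8% manually annotated real images,
    # typically at a lower learning rate.
    # train(model, real_loader, epochs=20, lr=1e-4)
    ```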